California's AI Act Vetoed
Under SB 1047, developers of very large frontier models (defined as models trained on computing power greater than 10^26 integer or floating-point operations, or costing more than $100 million at the start of training) and those who fine-tune large frontier models (also measured by compute requirements and/or training costs) would be responsible for ensuring that these models do not cause "critical harms" or other comparably grave harms to public safety and security. Under this bill, developers of large frontier models would be required to take numerous steps at three phases of development: some before training, some before using such a model or making it available, and some during use of covered models. Among the required steps would be installing a "kill switch" at the pre-training stage, taking reasonable measures to prevent models from posing unreasonable risks, and publishing redacted copies of the developers' safety and security protocols. Developers would also be required to hire independent third-party auditors to ensure compliance with the law's requirements.
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.61)
A New York legislator wants to pick up the pieces of the dead California AI bill
Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause "critical harm"; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people.
- North America > United States > New York (0.67)
- North America > United States > California (0.40)
- Law > Government & the Courts (0.64)
- Government > Military (0.40)
California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters
California Gov. Gavin Newsom has vetoed bill SB 1047, which aimed to prevent bad actors from using AI to cause "critical harm" to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations, including the Chamber of Commerce, had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." SB 1047 would have made developers of AI models responsible for adopting safety protocols to stop catastrophic uses of their technology.
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)